Rough winner-take-all for hardware oriented vector quantization algorithm
Authors
Abstract
Similar articles
Winner-Take-All Autoencoders
In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of ...
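The lifetime sparsity described in this abstract — each hidden unit keeps only its strongest activations across a mini-batch — can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code; the function name, the choice of `k`, and the `(batch, hidden)` layout are assumptions:

```python
import numpy as np

def lifetime_sparsity(activations, k):
    """For each hidden unit (column), keep only its k largest
    activations across the mini-batch and zero the rest.
    `activations` has shape (batch_size, n_hidden)."""
    masked = np.zeros_like(activations)
    # row indices of the k largest activations per column
    top_rows = np.argsort(activations, axis=0)[-k:]
    cols = np.arange(activations.shape[1])
    masked[top_rows, cols] = activations[top_rows, cols]
    return masked

acts = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.5, 0.4]])
sparse = lifetime_sparsity(acts, k=1)  # each unit fires for one sample only
```

In training, this mask would be applied to the hidden layer before the decoder, so gradients flow only through the winning activations.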
Learning vector quantization: The dynamics of winner-takes-all algorithms
Winner-Takes-All (WTA) prescriptions for Learning Vector Quantization (LVQ) are studied in the framework of a model situation: Two competing prototype vectors are updated according to a sequence of example data drawn from a mixture of Gaussians. The theory of on-line learning allows for an exact mathematical description of the training dynamics, even if an underlying cost function cannot be ide...
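A winner-takes-all LVQ update of the kind studied here can be sketched as a single LVQ1 step: only the prototype nearest to the example moves, toward it if the class labels agree and away otherwise. This is a minimal illustration of the general WTA prescription, not the specific dynamics analyzed in the paper; the learning rate and two-prototype setup are assumptions:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    """One winner-takes-all LVQ1 update: only the prototype
    closest to x is changed, attracted if labels match,
    repelled otherwise."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    w = np.argmin(dists)  # the single winner
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes

protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = [0, 1]
protos = lvq1_step(protos, labels, np.array([0.2, 0.0]), y=0)
```

The on-line analysis in the abstract tracks exactly such single-winner updates as a stochastic process over a stream of Gaussian-mixture examples.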
Winner-take-all price competition
This paper examines the competitiveness of winner-take-all price competition in homogeneous product oligopoly environments where underlying buyer demands and/or firms’ costs need not be continuous. Our analysis is motivated by the observation that a variety of economic settings have these features. For example, in 1996 an Ivy League university solicited bids from several vendors for its initiat...
Winner-Take-All EM Clustering
The EM algorithm is often used with mixture models to cluster data, but for efficiency reasons it is sometimes desirable to produce hard clusters. Several hard clustering limits of EM are known. For example, k-means clustering can be derived from EM in a Gaussian mixture model by taking the limit of all variances going to zero. We present a new method of deriving Winner-Take-All versions of EM ...
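The zero-variance limit mentioned in this abstract, where soft EM responsibilities collapse to hard winner-take-all assignments (i.e. k-means), can be sketched as one hard EM step. This is a generic illustration of that limit, not the paper's new derivation method:

```python
import numpy as np

def wta_em_step(X, means):
    """One hard (winner-take-all) EM step for an isotropic
    Gaussian mixture: each point is assigned wholly to its
    nearest mean (the zero-variance limit), then means are
    re-estimated from their assigned points."""
    # squared distances, shape (n_points, n_components)
    d = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(axis=1)  # hard E-step: winner takes all
    new_means = np.array([X[assign == k].mean(axis=0)
                          for k in range(len(means))])  # M-step
    return new_means, assign

X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
means = np.array([[0.0, 1.0], [5.0, 4.0]])
new_means, assign = wta_em_step(X, means)
```

Iterating this step is exactly Lloyd's k-means algorithm, which is the best-known hard-clustering limit of EM.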
Efficient k-Winner-Take-All Competitive Learning Hardware Architecture for On-Chip Learning
A novel k-winners-take-all (k-WTA) competitive learning (CL) hardware architecture is presented for on-chip learning in this paper. The architecture is based on an efficient pipeline allowing k-WTA competition processes associated with different training vectors to be performed concurrently. The pipeline architecture employs a novel codeword swapping scheme so that neurons failing the competiti...
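The k-WTA competition that this hardware pipeline accelerates can be expressed in software as a single competitive-learning step: the k codewords nearest to the training vector all win and move toward it. This sketch only shows the functional behavior, not the pipeline or the codeword-swapping scheme; the learning rate and update rule are assumptions:

```python
import numpy as np

def kwta_cl_step(codebook, x, k=2, lr=0.1):
    """One k-winners-take-all competitive-learning step:
    the k codewords nearest to x all move toward it;
    all other codewords are left unchanged."""
    dists = np.linalg.norm(codebook - x, axis=1)
    winners = np.argsort(dists)[:k]  # indices of the k winners
    codebook[winners] += lr * (x - codebook[winners])
    return codebook, winners

cb = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
cb, winners = kwta_cl_step(cb, np.array([0.5, 0.0]), k=2, lr=0.1)
```

In the hardware architecture, the expensive part is this nearest-k search; the pipeline in the paper overlaps that search across successive training vectors.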
Journal
Journal title: IEICE Electronics Express
Year: 2011
ISSN: 1349-2543
DOI: 10.1587/elex.8.773